Fixing warning after upgrading tagger to use CUDAExecutionProvider instead of CPUExecutionProvider in an attempt to improve performance. #63

Open · wants to merge 2 commits into main
Conversation

andrewtvuong

Please see #62

This pull request seeks help in fixing the following warning, emitted after upgrading to CUDAExecutionProvider from CPUExecutionProvider.

[W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 12 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.

The warning seems to indicate a bottleneck caused by GPU/CPU data transfers: memcpy nodes inserted into the graph to move tensors between devices.

…to the graph main_graph for CUDAExecutionProvider"
@piotr-sikora-v

I tried to use this, but I get an error:

Error occurred when executing WD14Tagger|pysssss:

Can't allocate memory on the CUDA device using this package of OnnxRuntime. Please use the CUDA package of OnnxRuntime to use this feature.

File "/opt/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/opt/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/opt/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/opt/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger/wd14tagger.py", line 208, in tag
    tags.append(wait_for_async(lambda: tag(image, model, threshold, character_threshold, exclude_tags, replace_underscore, trailing_comma)))
File "/opt/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger/pysssss.py", line 214, in wait_for_async
    loop.run_until_complete(run_async())
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
File "/opt/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger/pysssss.py", line 204, in run_async
    r = await async_fn()
File "/opt/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger/wd14tagger.py", line 84, in tag
    ort_inputs = {input_name: ort.OrtValue.ortvalue_from_numpy(image, 'cuda')}
File "/opt/ComfyUI/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 695, in ortvalue_from_numpy
    C.OrtValue.ortvalue_from_numpy(
